We study exact active learning of binary and multiclass classifiers with margin. Given an $n$-point set $X \subset \mathbb{R}^m$, we want to learn any unknown classifier on $X$ whose classes have finite strong convex hull margin, a new notion extending the SVM margin. In the standard active learning setting, where only label queries are allowed, learning a classifier with strong convex hull margin $\gamma$ requires, in the worst case, $\Omega\big(1+\frac{1}{\gamma}\big)^{(m-1)/2}$ queries. On the other hand, using the more powerful seed queries (a variant of equivalence queries), the target classifier can be learned in $O(m \log n)$ queries via Littlestone's Halving algorithm; however, Halving is computationally inefficient. In this work we show that, by carefully combining the two types of queries, a binary classifier can be learned in $\operatorname{poly}(n+m)$ time using only $O(m^2 \log n)$ label queries and $O\big(m \log \frac{m}{\gamma}\big)$ seed queries; the result extends to $k$-class classifiers at the price of a $k!\,k^2$ multiplicative overhead. Similar results hold when the input points have bounded bit complexity, or when only one class has strong convex hull margin. We complement these upper bounds by proving that, in the worst case, any algorithm needs $\Omega\big(k m \log \frac{1}{\gamma}\big)$ seed and label queries to learn a $k$-class classifier with strong convex hull margin $\gamma$.
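For context, the sketch below illustrates the classical Halving idea referenced in the abstract: keep a version space, predict by majority vote, and eliminate every hypothesis inconsistent with a counterexample, so each mistake at least halves the version space. Function names and the toy threshold class are illustrative only; this is not the paper's combined label/seed-query algorithm.

```python
import random

def halving_learn(hypotheses, points, target, max_rounds=100):
    """Classical Halving: maintain a version space, predict with the majority
    vote, and discard every hypothesis that disagrees with a counterexample.
    Each counterexample at least halves the version space, so at most
    log2(|hypotheses|) rounds produce counterexamples."""
    version_space = list(hypotheses)
    for _ in range(max_rounds):
        def majority(x):
            votes = sum(h(x) for h in version_space)
            return int(votes * 2 >= len(version_space))
        # Equivalence-style check: any point where the majority vote
        # disagrees with the (unknown) target labelling is a counterexample.
        counterexamples = [x for x in points if majority(x) != target(x)]
        if not counterexamples:
            return majority  # consistent with the target on all points
        x = random.choice(counterexamples)
        y = target(x)        # one label query resolves the counterexample
        version_space = [h for h in version_space if h(x) == y]
    return None

# Toy usage: learn a 1-D threshold over 20 points from 21 candidate thresholds.
points = list(range(20))
hypotheses = [lambda x, t=t: int(x >= t) for t in range(21)]
target = hypotheses[7]
learned = halving_learn(hypotheses, points, target)
assert all(learned(x) == target(x) for x in points)
```

The inefficiency mentioned in the abstract comes from the fact that the version space can be exponentially large, so maintaining it explicitly, as above, does not scale.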
Maximizing a submodular function is a fundamental task in machine learning. In this paper, we study the deletion-robust version of the problem under the classic matroid constraint. Here the goal is to extract a small-size summary of the dataset that contains a high-value independent set even after an adversary deletes some elements. We present constant-factor approximation algorithms whose space complexity depends on the rank $k$ of the matroid and the number $d$ of deleted elements. In the centralized setting we present a $(4.597+O(\varepsilon))$-approximation algorithm with summary size $O\big(\frac{k+d}{\varepsilon^2}\log\frac{k}{\varepsilon}\big)$, which improves to a $(3.582+O(\varepsilon))$-approximation with summary size $O\big(k + \frac{d}{\varepsilon^2}\log\frac{k}{\varepsilon}\big)$ when the objective is monotone. In the streaming setting we provide a $(9.435+O(\varepsilon))$-approximation algorithm with summary size and memory $O\big(k + \frac{d}{\varepsilon^2}\log\frac{k}{\varepsilon}\big)$; the approximation factor is then improved to $(5.582+O(\varepsilon))$ in the monotone case.
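As background for the problem setting, here is a minimal, non-robust baseline: the classic greedy algorithm for monotone submodular maximization under a matroid constraint. The names and the toy coverage objective are illustrative; the deletion-robust algorithms from the abstract additionally maintain a larger summary that still contains a good independent set after $d$ adversarial deletions.

```python
def greedy_matroid(elements, f, is_independent, k):
    """Classic greedy for monotone submodular maximization under a matroid
    constraint (a 1/2-approximation): repeatedly add the feasible element
    with the largest marginal gain until the rank k is reached."""
    S = []
    remaining = set(elements)
    while len(S) < k and remaining:
        best, best_gain = None, 0.0
        for e in remaining:
            if not is_independent(S + [e]):
                continue
            gain = f(S + [e]) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break
        S.append(best)
        remaining.remove(best)
    return S

# Toy usage: a coverage objective under a uniform matroid (cardinality) constraint.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
solution = greedy_matroid(sets.keys(), f, lambda S: len(S) <= 2, k=2)
```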
TensorFlow GNN (TF-GNN) is a scalable library for Graph Neural Networks in TensorFlow. It is designed from the bottom up to support the kinds of rich, heterogeneous graph data that arise in today's information ecosystems. Many production models at Google use TF-GNN, and it has recently been released as an open-source project. In this paper, we describe the TF-GNN data model, its Keras modeling API, and associated capabilities such as graph sampling, distributed training, and accelerator support.
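As a rough illustration of the heterogeneous data model, the sketch below builds a tiny in-memory graph with one node set and one edge set using the open-source tensorflow_gnn package. Node-set, edge-set, and feature names are placeholders, and minor details of the constructors may differ between library versions.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

# A toy citation graph: 3 "paper" nodes with 16-d features, 2 "cites" edges.
graph = tfgnn.GraphTensor.from_pieces(
    node_sets={
        "paper": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([3]),
            features={"hidden_state": tf.random.normal([3, 16])}),
    },
    edge_sets={
        "cites": tfgnn.EdgeSet.from_fields(
            sizes=tf.constant([2]),
            adjacency=tfgnn.Adjacency.from_indices(
                source=("paper", tf.constant([0, 1])),
                target=("paper", tf.constant([1, 2]))),
        ),
    },
)
```

A GraphTensor like this can carry any number of node sets and edge sets, which is what makes the data model a fit for heterogeneous graphs; Keras layers from the library then operate on such tensors.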
We study the private $k$-median and $k$-means clustering problems in $d$-dimensional Euclidean space. By leveraging tree embeddings, we give an efficient and easy-to-implement algorithm that is empirically competitive with non-private methods. We prove that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot \mathrm{OPT} + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee. (The dimension term $d$ can be replaced by $O(\log k)$ using standard dimension reduction techniques.) Although the worst-case guarantee is worse than that of state-of-the-art private clustering methods, the algorithm we propose is practical, runs in near-linear, $\tilde{O}(nkd)$, time, and scales to tens of millions of points. We also show that our method is amenable to parallelization in large-scale distributed computing environments. In particular, we show that our private algorithms can be implemented in a logarithmic number of MPC rounds in the sublinear-memory regime. Finally, we complement our theoretical analysis with an empirical evaluation demonstrating the algorithm's efficiency and accuracy in comparison with other private clustering baselines.
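To give intuition for the tree-embedding approach, here is a toy sketch: points are bucketed into a randomly shifted hierarchical grid, and per-cell counts are released with Laplace noise, so that heavy cells at fine levels hint at cluster centers. All names and the naive per-level budget split are assumptions for illustration; this is neither the paper's algorithm nor a rigorous privacy analysis.

```python
import numpy as np

def noisy_grid_counts(points, levels, epsilon, rng=np.random.default_rng(0)):
    """Toy illustration of a tree embedding for private clustering: hash points
    into a randomly shifted hierarchical grid and release Laplace-noised counts
    per cell and level. Sketch only; not the paper's algorithm."""
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(), points.max()
    side = (hi - lo) * 2.0 + 1e-9            # enclosing box, doubled to allow a random shift
    shift = rng.uniform(0.0, side / 2.0, size=points.shape[1])
    eps_per_level = epsilon / levels          # naive budget split across levels
    noisy_levels = []
    for level in range(levels):
        cell = side / (2 ** level)
        ids = np.floor((points + shift) / cell).astype(int)
        keys, true_counts = np.unique(ids, axis=0, return_counts=True)
        noisy = true_counts + rng.laplace(scale=1.0 / eps_per_level, size=len(keys))
        noisy_levels.append((cell, dict(zip(map(tuple, keys), noisy))))
    return noisy_levels

# Toy usage: two well-separated 2-D blobs.
data_rng = np.random.default_rng(1)
data = np.vstack([data_rng.normal(0, 0.1, (100, 2)), data_rng.normal(5, 0.1, (100, 2))])
levels = noisy_grid_counts(data, levels=4, epsilon=1.0)
```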
Random walks are a fundamental primitive used in many machine learning algorithms, with several applications in clustering and semi-supervised learning. Despite their relevance, the first efficient parallel algorithm to compute random walks was introduced only recently (Lacki et al.). Unfortunately, their method has a fundamental shortcoming: the algorithm is non-local, in that it heavily relies on computing random walks out of all nodes of the input graph, even though in many practical applications one is interested in computing random walks only from a small subset of the nodes. In this paper, we present a new algorithm that overcomes this limitation by building random walks efficiently and locally at the same time. We show that our technique is both memory- and round-efficient and, in particular, yields an efficient parallel local clustering algorithm. Finally, we complement our theoretical analysis with experimental results showing that our algorithm is significantly more scalable than previous approaches.
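The snippet below illustrates the "local" regime the abstract motivates: walks are simulated only from a small set of seed nodes and only touch the neighborhoods they actually visit. It is a simple sequential sketch with illustrative names, not the parallel algorithm described above.

```python
import random

def local_random_walks(adj, seeds, walk_length, walks_per_seed, rng=random.Random(0)):
    """Simulate random walks starting only from a small set of seed nodes.
    Only the adjacency lists actually visited are ever read, which is the
    point of the local setting. Sequential sketch, not the parallel method."""
    walks = []
    for s in seeds:
        for _ in range(walks_per_seed):
            walk, node = [s], s
            for _ in range(walk_length):
                neighbors = adj.get(node)
                if not neighbors:
                    break
                node = rng.choice(neighbors)
                walk.append(node)
            walks.append(walk)
    return walks

# Toy usage: a 5-node path graph, walks from a single seed.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
walks = local_random_walks(adj, seeds=[2], walk_length=4, walks_per_seed=3)
```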
Multi-view projection techniques have shown themselves to be highly effective in achieving top-performing results in the recognition of 3D shapes. These methods involve learning how to combine information from multiple view-points. However, the camera view-points from which these views are obtained are often fixed for all shapes. To overcome the static nature of current multi-view techniques, we propose learning these view-points. Specifically, we introduce the Multi-View Transformation Network (MVTN), which uses differentiable rendering to determine optimal view-points for 3D shape recognition. As a result, MVTN can be trained end-to-end with any multi-view network for 3D shape classification. We integrate MVTN into a novel adaptive multi-view pipeline that is capable of rendering both 3D meshes and point clouds. Our approach demonstrates state-of-the-art performance in 3D classification and shape retrieval on several benchmarks (ModelNet40, ScanObjectNN, ShapeNet Core55). Further analysis indicates that our approach exhibits improved robustness to occlusion compared to other methods. We also investigate additional aspects of MVTN, such as 2D pretraining and its use for segmentation. To support further research in this area, we have released MVTorch, a PyTorch library for 3D understanding and generation using multi-view projections.
With the recent advances in video and 3D understanding, novel 4D spatio-temporal challenges fusing both concepts have emerged. Towards this direction, the Ego4D Episodic Memory Benchmark proposed a task for Visual Queries with 3D Localization (VQ3D). Given an egocentric video clip and an image crop depicting a query object, the goal is to localize the 3D position of the center of that query object with respect to the camera pose of a query frame. Current methods tackle the problem of VQ3D by lifting the 2D localization results of the sister task Visual Queries with 2D Localization (VQ2D) into a 3D reconstruction. Yet, we point out that the low number of Queries with Poses (QwP) from previous VQ3D methods severely hinders their overall success rate and highlights the need for further effort in 3D modeling to tackle the VQ3D task. In this work, we formalize a pipeline that better entangles 3D multiview geometry with 2D object retrieval from egocentric videos. We estimate more robust camera poses, leading to more successful object queries and substantially improved VQ3D performance. In practice, our method reaches a top-1 overall success rate of 86.36% on the Ego4D Episodic Memory Benchmark VQ3D, a 10x improvement over the previous state-of-the-art. In addition, we provide a complete empirical study highlighting the remaining challenges in VQ3D.
The understanding capabilities of current state-of-the-art 3D models are limited by datasets with a small number of annotated data and a pre-defined set of categories. In its 2D counterpart, recent advances have shown that similar problems can be significantly alleviated by employing knowledge from other modalities, such as language. Inspired by this, leveraging multimodal information for 3D modality could be promising to improve 3D understanding under the restricted data regime, but this line of research is not well studied. Therefore, we introduce ULIP to learn a unified representation of image, text, and 3D point cloud by pre-training with object triplets from the three modalities. To overcome the shortage of training triplets, ULIP leverages a pre-trained vision-language model that has already learned a common visual and textual space by training with massive image-text pairs. Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets. ULIP is agnostic to 3D backbone networks and can easily be integrated into any 3D architecture. Experiments show that ULIP effectively improves the performance of multiple recent 3D backbones by simply pre-training them on ShapeNet55 using our framework, achieving state-of-the-art performance in both standard 3D classification and zero-shot 3D classification on ModelNet40 and ScanObjectNN. ULIP also improves the performance of PointMLP by around 3% in 3D classification on ScanObjectNN, and outperforms PointCLIP by 28.8% on top-1 accuracy for zero-shot 3D classification on ModelNet40. Our code and pre-trained models will be released.
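The sketch below shows a generic CLIP-style contrastive loss for aligning point-cloud embeddings with the (frozen) image and text embeddings of the same objects, which is the kind of multimodal alignment the abstract describes. It is a hedged illustration: the function name, temperature, and the use of random tensors in place of real encoder outputs are assumptions, not ULIP's exact training objective.

```python
import torch
import torch.nn.functional as F

def alignment_loss(pc_emb, img_emb, txt_emb, temperature=0.07):
    """Contrastive alignment of 3D embeddings with frozen image/text embeddings:
    the i-th triplet is the only positive pair in the batch, and cross-entropy
    over the similarity matrix pulls matched embeddings together."""
    pc = F.normalize(pc_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    labels = torch.arange(pc.shape[0])              # diagonal entries are positives
    loss = 0.0
    for other in (img, txt):
        logits = pc @ other.t() / temperature        # 3D-vs-other similarities
        loss = loss + 0.5 * (F.cross_entropy(logits, labels) +
                             F.cross_entropy(logits.t(), labels))
    return loss / 2

# Toy usage with random embeddings standing in for encoder outputs (batch of 8).
loss = alignment_loss(torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 512))
```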
A tractogram is a virtual representation of the brain white matter. It is composed of millions of virtual fibers, encoded as 3D polylines, which approximate the white matter axonal pathways. To date, tractograms are the most accurate white matter representation and thus are used for tasks like presurgical planning and investigations of neuroplasticity, brain disorders, or brain networks. However, it is a well-known issue that a large portion of tractogram fibers is not anatomically plausible and can be considered artifacts of the tracking procedure. With Verifyber, we tackle the problem of filtering out such non-plausible fibers using a novel fully-supervised learning approach. Differently from other approaches based on signal reconstruction and/or brain topology regularization, we guide our method with the existing anatomical knowledge of the white matter. Using tractograms annotated according to anatomical principles, we train our model, Verifyber, to classify fibers as either anatomically plausible or non-plausible. The proposed Verifyber model is an original Geometric Deep Learning method that can deal with variable size fibers, while being invariant to fiber orientation. Our model considers each fiber as a graph of points, and by learning features of the edges between consecutive points via the proposed sequence Edge Convolution, it can capture the underlying anatomical properties. The resulting filtering is highly accurate and robust across an extensive set of experiments, and fast: with a 12GB GPU, filtering a tractogram of 1M fibers requires less than a minute. Verifyber implementation and trained models are available at https://github.com/FBK-NILab/verifyber.
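To make the "fiber as a graph of points with edges between consecutive points" idea concrete, here is a hypothetical edge-convolution-style sketch: features are computed on consecutive-point edges and pooled symmetrically so fibers of any length map to one classification score. Class and layer names are illustrative assumptions; this is not the released Verifyber architecture (which, unlike this sketch, is also invariant to fiber orientation).

```python
import torch
import torch.nn as nn

class SequenceEdgeConvSketch(nn.Module):
    """Toy fiber classifier: per-edge features from consecutive 3D points,
    an MLP on each edge, and max-pooling over edges so that variable-length
    fibers produce a fixed-size representation."""
    def __init__(self, hidden=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(6, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 2)   # plausible vs. non-plausible

    def forward(self, fiber):              # fiber: (num_points, 3)
        src, dst = fiber[:-1], fiber[1:]   # edges between consecutive points
        edge_feat = torch.cat([src, dst - src], dim=-1)   # point + displacement
        h = self.edge_mlp(edge_feat)
        pooled = h.max(dim=0).values       # symmetric pooling over edges
        return self.head(pooled)

# Toy usage: classify one fiber made of 30 points.
logits = SequenceEdgeConvSketch()(torch.randn(30, 3))
```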
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.